Predicting Sequential Data with LSTMs Augmented with Strictly 2-Piecewise Input Vectors

Authors

  • Chihiro Shibata
  • Jeffrey Heinz
Abstract

Recurrent neural networks such as Long Short-Term Memory (LSTM) networks are often used to learn from various kinds of time-series data, especially those that involve long-distance dependencies. We introduce a vector representation for the Strictly 2-Piecewise (SP-2) formal languages, which encode certain kinds of long-distance dependencies using subsequences. These vectors are added to the LSTM architecture as an additional input. Through experiments with the problems in the SPiCe dataset (Balle Pigem et al., 2016), we demonstrate that for certain problems, these vectors slightly—but significantly—improve the top-5 score (normalized discounted cumulative gain) as well as the accuracy as compared to the LSTM architecture without the SP-2 input vector. These results are also compared to an LSTM architecture with an input vector based on bigrams.
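The SP-2 representation above can be thought of as a binary indicator over length-2 subsequences: for each ordered pair of alphabet symbols (a, b), the vector records whether "ab" occurs as a (not necessarily contiguous) subsequence of the prefix read so far. The sketch below is an illustrative reconstruction of such a vector, not the paper's implementation; the function name `sp2_vector` and the flat pair-indexing scheme are assumptions.

```python
def sp2_vector(prefix, alphabet):
    """Binary vector over symbol pairs (a, b), flattened row-major:
    entry for (a, b) is 1 iff 'ab' occurs as a subsequence of prefix."""
    idx = {s: i for i, s in enumerate(alphabet)}
    n = len(alphabet)
    vec = [0] * (n * n)
    seen = set()  # symbols observed so far in the prefix
    for c in prefix:
        # every previously seen symbol a now forms the subsequence a..c
        for a in seen:
            vec[idx[a] * n + idx[c]] = 1
        seen.add(c)
    return vec

# Example: in "abca", the pairs aa, ab, ac, ba, bc, ca occur as
# subsequences, while bb, cb, cc do not.
print(sp2_vector("abca", "abc"))  # [1, 1, 1, 1, 0, 1, 1, 0, 0]
```

In an architecture like the one described, a vector of this kind would be concatenated with the ordinary symbol embedding at each time step, giving the LSTM direct access to long-distance subsequence information.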


Similar Resources

Proposing Multimodal Integration Model Using LSTM and Autoencoder

We propose a neural network architecture that can learn and integrate sequential multimodal information using Long Short-Term Memory. Our model consists of encoder and decoder LSTMs and a multimodal autoencoder. To integrate sequential multimodal information, first, the encoder LSTM encodes a sequential input to a fixed-range feature vector for each modality. Second, the multimodal aut...


Predicting the Efficiency of Decision-Making Unit by Using Piecewise Polynomial Extrapolation in Different Times

In this article, we estimate the efficiency of a decision-making unit by offering continuous piecewise polynomial extrapolation and interpolation with the input-oriented CCR model, on the assumption of constant returns to scale at different times. Finally, we estimate the efficiency of a decision-making unit at different times by offering an example.


Head-Lexicalized Bidirectional Tree LSTMs

Sequential LSTMs have been extended to model tree structures, giving competitive results for a number of tasks. Existing methods model constituent trees by bottom-up combinations of constituent nodes, making direct use of input word information only for leaf nodes. This is different from sequential LSTMs, which contain references to input words for each node. In this paper, we propose a method ...


Measuring robust overall profit efficiency with uncertainty in input and output price vectors

The classic overall profit measure needs precise information about inputs, outputs, and input and output price vectors. In the real world, not all data are certain. Therefore, in this case, stochastic and fuzzy methods are used for measuring overall profit efficiency. These methods require more information about the data, such as a probability distribution function or a data membership function, which in some cases may no...


Bidirectional Tree-Structured LSTM with Head Lexicalization

Sequential LSTMs have been extended to model tree structures, giving competitive results for a number of tasks. Existing methods model constituent trees by bottom-up combinations of constituent nodes, making direct use of input word information only for leaf nodes. This is different from sequential LSTMs, which contain references to input words for each node. In this paper, we propose a method for...



Journal:

Volume   Issue 

Pages  -

Publication date: 2016